
Understanding Attention and Generalization in Graph Neural Networks

Neural Information Processing Systems

We aim to better understand attention over nodes in graph neural networks (GNNs) and identify factors influencing its effectiveness. We particularly focus on the ability of attention GNNs to generalize to larger, more complex or noisy graphs. Motivated by insights from the work on Graph Isomorphism Networks, we design simple graph reasoning tasks that allow us to study attention in a controlled environment. We find that under typical conditions the effect of attention is negligible or even harmful, but under certain conditions it provides an exceptional gain in performance of more than 60% in some of our classification tasks. Satisfying these conditions in practice is challenging and often requires optimal initialization or supervised training of attention. We propose an alternative recipe and train attention in a weakly-supervised fashion that approaches the performance of supervised models, and, compared to unsupervised models, improves results on several synthetic as well as real datasets. Source code and datasets are available at https://github.com/bknyaz/graph
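As a minimal illustration of the mechanism the abstract studies (not the paper's own implementation), attention over nodes can be sketched as a softmax over per-node scores, followed by an attention-weighted sum that pools node features into a graph embedding. The scoring vector `w` below is a hypothetical learned parameter:

```python
import numpy as np

def node_attention_pool(X, w):
    """Attention-weighted global pooling over the nodes of one graph.

    X : (N, d) node feature matrix
    w : (d,) scoring vector (stands in for a learned attention parameter)
    Returns the pooled (d,) graph embedding and the (N,) attention weights.
    """
    scores = X @ w                # one scalar score per node
    scores -= scores.max()        # shift for numerical stability
    alpha = np.exp(scores)
    alpha /= alpha.sum()          # softmax: weights are positive, sum to 1
    z = alpha @ X                 # weighted sum of node features
    return z, alpha

# toy graph: 4 nodes with 3-dimensional features
X = np.array([[1., 0., 0.],
              [0., 1., 0.],
              [0., 0., 1.],
              [1., 1., 1.]])
w = np.array([2., -1., 0.5])
z, alpha = node_attention_pool(X, w)
```

With uniform weights this reduces to mean pooling; the paper's finding is that learning `alpha` well (e.g. with weak supervision) is what makes attention help rather than hurt.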


Reviews: Understanding Attention and Generalization in Graph Neural Networks

Neural Information Processing Systems

UPDATE: I have increased the score to 6, provided that the authors revise the paper as promised in their responses. This paper covers more than one topic. The first part mostly discusses the attention mechanism, the second introduces a new model, ChebyGIN, and the third proposes a weakly-supervised attention training approach. Overall, the paper is not entirely about its title, "Understanding Attention in Graph Neural Networks". In Section 2.3 the paper states that "the performance of both GCNs and GINs is quite poor and, consequently, it is also hard for the attention subnetwork to learn", which motivates proposing ChebyGIN as a stronger model.


Reviews: Understanding Attention and Generalization in Graph Neural Networks

Neural Information Processing Systems

This paper explores node-wise attention in graph neural networks, with the aim of characterizing when it works well. The authors demonstrate that attention often affords only marginal benefits. They propose a weakly supervised regime that tends to improve performance. The experiments are thorough and presented well. Reviewers have highlighted some presentation issues that should be addressed in future versions of the manuscript.



Understanding Attention and Generalization in Graph Neural Networks

Knyazev, Boris, Taylor, Graham W., Amer, Mohamed

Neural Information Processing Systems
